15 June 2023

Shirish Agarwal: Ayisha, Manju Warrier, Debutsav, Books

Ayisha After a long time I saw a movie that I enjoyed wholeheartedly, and it unexpectedly touched my heart. The name of the movie is Ayisha. The first frame of the movie itself sets the pace, where we see Ayisha (Manju Warrier), who decides to help out a gang, as a lot of women were being hassled. So she agrees to hoodwink cops and help launder some money. Then she is shown to work as a maid for an elite Arab family. To portray a Muslim character in these polarized times really shows guts, especially when the othering of the Muslim has been happening 24/7. In fact, just a few days back I was shocked to learn that Muslim homes were being marked, as Jews' homes were marked in the 1930s. Not just homes but businesses too. And a few days later, in a totally hypocritical fashion, one of the judges said that you cannot push people to buy or not buy from a shop. This after systematically running the whole hate campaign for almost 2 weeks. What value do the judge's statements have after 2 weeks??? The poison has already seeped in. But I'm drifting from the topic/movie.

The real fun of the movie is the beautiful relationship that develops between Ayisha and Mama, who is the biggest maternal figure in the house; in fact, her command is what goes in the house. The house, or palace (which is the more accurate description), is shown as opulent, but not as rich as both Mama and Ayisha are spiritually and emotionally, both giving and sharing of each other. It is almost a mother-daughter relationship, although with others Mama is shown as having a bit of an iron hand. Halfway through the movie we come to know that Ayisha was also a dramatist and an actress, having worked in early Malayalam movies. I do not want to go through all the ups and downs, as that is the beauty of the movie and it needs to be seen for that aspect. I am always sort of in two minds about whether I should promote a book or series or movie or not, because most of the time it is the unexpected that works; when we have expectations, it doesn't. Avatar: The Way of Water is an exception; there are not many movies I can recall like that, where I had expectations and the movie still surpassed them. So maybe go with no expectations at all.

Manju Warrier Manju Warrier should actually be called Manju Warrior, as she chose to stand with the survivor rather than with the sexism that is prevalent in the Malayalam film industry, which is more or less a mirror of Bollywood and of society as a whole. These three links should give enough background knowledge as to what has been happening, although I'm sure my Malayalam friends would more than add to that knowledge whatever may be missing. In quite a few movies, the women are making inroads without significant male star power; Manju's movies in particular have had no male lead for the last few films. Whether that is a deliberate choice on Manju's part or an obstacle being put in front of her, I cannot say. Anyone can see that having both a male lead and a female lead enriches the value of a movie quite a bit. This doesn't mean one is better than the other, but having both enriches the end product, as simple as that. This is sadly not happening. Having POSH training and having an ICC is something that each organization should aim for; it's pretty much the need of the hour, especially when we have young people all around us. I am hopeful that people who are from Kerala will shed some more background light on what has been happening.

Books I haven't yet submitted an application for Debconf. But my idea is that, irrespective of whether or not I'm there, I do hope we can have a library where people can donate books and people can take away books as well. A kind of circular marketplace/library where somebody just notes what books are available. Even if 100-odd people are coming to Debconf, that easily means 100 books in various languages. That in itself would be interesting, and we would get to see what people are reading, wanting to discuss, etc. We could even have readings. IIRC, in 2016 we had a children's area; maybe we could do some readings from some books to children, which would fuel their imagination. Even people like me who are deaf would be willing to look at excerpts and be charmed by them. For instance, in all my forays into fantasy literature, except for Babylon Steel I haven't read one book that has a female lead character, and I have read probably around 100-odd fantasy books to date. Not a lot, but to my mind that is still a big gap as far as the literature is concerned. How would more women write fantasy if they don't have heroes to look up to :(. Or maybe I am missing some authors and characters that others know and I do not. Do others feel the same, or has this question not even been asked??? Dunno. Please let me know.

Debutsav So apparently Debutsav is happening 2 days from now. While I did come to know about it a few days back, I had to think about whether I wanted to apply for this or for Debconf, as I physically and emotionally can't do justice to both, even though they are a few months apart. I wish all the best to the attendees as well as the presenters sharing all the projects, and hopefully somebody will share at least some of the projects presented there so we may know what new projects or software to follow. Till later.

14 June 2023

Freexian Collaborators: Monthly report about Debian Long Term Support, May 2023 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In May, 18 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 6.0h (out of 6.0h assigned and 8.0h from previous period), thus carrying over 8.0h to the next month.
  • Anton Gladky did 6.0h (out of 8.0h assigned and 7.0h from previous period), thus carrying over 9.0h to the next month.
  • Bastien Roucariès did 17.0h (out of 17.0h assigned and 3.0h from previous period), thus carrying over 3.0h to the next month.
  • Ben Hutchings did 17.0h (out of 16.0h assigned and 8.0h from previous period), thus carrying over 7.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 0.0h (out of 0h assigned and 12.0h from previous period), thus carrying over 12.0h to the next month.
  • Dominik George did 0.0h (out of 0h assigned and 20.34h from previous period), thus carrying over 20.34h to the next month.
  • Emilio Pozuelo Monfort did 32.0h (out of 18.5h assigned and 16.0h from previous period), thus carrying over 2.5h to the next month.
  • Guilhem Moulin did 20.0h (out of 8.5h assigned and 11.5h from previous period).
  • Holger Levsen did 0.0h (out of 0h assigned and 10.0h from previous period), thus carrying over 10.0h to the next month.
  • Lee Garrett did 0.0h (out of 0h assigned and 40.5h from previous period), thus carrying over 40.5h to the next month.
  • Markus Koschany did 34.5h (out of 34.5h assigned).
  • Roberto C. Sánchez did 18.25h (out of 20.5h assigned and 11.5h from previous period), thus carrying over 13.75h to the next month.
  • Scarlett Moore did 20.0h (out of 20.0h assigned).
  • Sylvain Beucler did 34.5h (out of 29.0h assigned and 5.5h from previous period).
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 16.0h (out of 15.0h assigned and 1.0h from previous period).
  • Utkarsh Gupta did 5.5h (out of 5.0h assigned and 26.0h from previous period), thus carrying over 25.5h to the next month.

Evolution of the situation In May, we released 34 DLAs. Several of the DLAs constituted notable security updates to LTS during the month of May. Of particular note were the linux (4.19) and linux-5.10 packages, both of which addressed a considerable number of CVEs. Additionally, the postgresql-11 package was updated by synchronizing it with the 11.20 release from upstream. Notable non-security updates were made to the distro-info-data database and the timezone database. The distro-info-data package was updated with the final expected release date of Debian 12, made aware of Debian 14 and Ubuntu 23.10, and updated with the latest EOL dates for Ubuntu releases. The tzdata and libdatetime-timezone-perl packages were updated with the 2023c timezone database. The changes in these packages ensure that, in addition to the latest security updates, LTS users also have the latest information concerning Debian and Ubuntu support windows, as well as the latest timezone data for accurate worldwide timekeeping.

LTS contributor Anton implemented an improvement to the Debian Security Tracker "Unfixed vulnerabilities in unstable without a filed bug" view, allowing for more effective management of CVEs which do not yet have a corresponding bug entry in the Debian BTS. LTS contributor Sylvain concluded an audit of obsolete packages still supported in LTS to ensure that new CVEs are properly associated. In this case, a package being obsolete means that it is no longer associated with a Debian release for which the Debian Security Team has direct responsibility. When this occurs, it is the responsibility of the LTS team to ensure that incoming CVEs are properly associated with packages which exist only in LTS. Finally, LTS contributors also contributed several updates to packages in unstable/testing/stable to fix CVEs. This helps package maintainers, addresses CVEs in current and future Debian releases, and ensures that the CVEs do not remain open for an extended period of time only for the LTS team to have to deal with them much later.

Thanks to our sponsors Sponsors that joined recently are in bold.

10 June 2023

Andrew Cater: 202306101949 - Release of install media - scripts running now

People are working quietly, cross-checking, reading back steps and running individual steps - we're really almost there for the install media. Just had a friendly, humorous meal out by the barbecue in Sledge's garden. It's been quite a long day but we're just about finished. All this, and then we'll probably have the first point release for Bookworm, 12.1, in about a month. That will contain some few fixes which came in at the last minute and any other issues we've found today. BOOKWORM IS HERE!!

2 June 2023

Matt Brown: Calling time on DNSSEC: The costs exceed the benefits

I'm calling time on DNSSEC. Last week, prompted by a change in my DNS hosting setup, I began removing it from the few personal zones I had signed. Then this Monday the .nz ccTLD experienced a multi-day availability incident triggered by the annual DNSSEC key rotation process. This incident broke several of my unsigned zones, which led me to say very unkind things about DNSSEC on Mastodon, and now I feel compelled to more completely explain my thinking: For almost all domains and use-cases, the costs and risks of deploying DNSSEC outweigh the benefits it provides. Don't bother signing your zones.

The .nz incident, while topical, is not the motivation or the trigger for this conclusion. Had it been a novel incident, it would still have been annoying, but novel incidents are how we learn, so I have a small tolerance for them. The problem with DNSSEC is precisely that this incident was not novel, just the latest in a long and growing list. It's a clear pattern. DNSSEC is complex and risky to deploy. Choosing to sign your zone will almost inevitably mean that you will experience lower availability for your domain over time than if you leave it unsigned. Even if you have a team of DNS experts maintaining your zone and DNS infrastructure, the risk of routine operational tasks triggering a loss of availability (unrelated to any attempted attacks that DNSSEC may thwart) is very high, almost guaranteed to occur. Worse, because of the nature of DNS and DNSSEC, these incidents will tend to be prolonged and out of your control to remediate in a timely fashion.

The only benefit you get in return for accepting this almost certain reduction in availability is trust in the integrity of the DNS data a subset of your users (those who validate DNSSEC) receive. Trusted DNS data that is then used to communicate across an untrusted network layer. An untrusted network layer which you are almost certainly protecting with TLS, which provides a more comprehensive and trustworthy set of security guarantees than DNSSEC is capable of, and provides those guarantees to all your users regardless of whether they are validating DNSSEC or not.

In summary, in our modern world where TLS is ubiquitous, DNSSEC provides only a thin layer of redundant protection on top of the comprehensive guarantees provided by TLS, but adds significant operational complexity, cost and a high likelihood of lowered availability. In an ideal world, where the deployment cost of DNSSEC and the risk of DNSSEC-induced outages were both low, it would absolutely be desirable to have that redundancy in our layers of protection. In the real world, given the DNSSEC protocol we have today, the choice to avoid its complexity and rely on TLS alone is not at all painful or risky to make as the operator of an online service. In fact, it's the prudent choice that will result in better overall security outcomes for your users. Ignore DNSSEC and invest the time and resources you would have spent deploying it in improving your TLS key and certificate management.

Ironically, the one use-case where I think a valid counter-argument for this position can be made is TLDs (including ccTLDs such as .nz). Despite its many failings, DNSSEC is an Internet Standard, and as infrastructure providers, TLDs have an obligation to enable its use. Unfortunately this means that everyone has to bear the costs, complexities and availability risks that DNSSEC burdens these operators with. We can't avoid that fact, but we can avoid creating further costs, complexities and risks by choosing not to deploy DNSSEC on the rest of our non-TLD zones.

But DNSSEC will save us from the evil CA ecosystem! Historically, the strongest motivation for DNSSEC has not been the direct security benefits themselves (which, as explained above, are minimal compared to what TLS provides), but the new capabilities and use-cases that could be enabled if DNS were able to provide integrity and trusted data to applications. Specifically, the promise of DNS-based Authentication of Named Entities (DANE) is that with DNSSEC we can be free of the X.509 certificate authority ecosystem, and along with it the expensive certificate issuance racket and dubious trust properties that have long been its most distinguishing features. Ten years ago this was an extremely compelling proposition with significant potential to improve the Internet. That potential has gone unfulfilled. Instead of maturing as deployments progressed and associated operational experience was gained, DNSSEC has been beset by the discovery of issue after issue. Each of these has necessitated further changes and additions to the protocol, increasing complexity and deployment cost. For many zones, including significant zones like google.com (where I led the attempt to evaluate and deploy DNSSEC in the mid 2010s), it is simply infeasible to deploy the protocol at all, let alone in a reliable and dependable manner.

While DNSSEC maturation and deployment have been languishing, the TLS ecosystem has been steadily and impressively improving. Thanks to the efforts of many individuals and companies, although still founded on the use of a set of root certificate authorities, the TLS and CA ecosystem today features transparency, validation and multi-party accountability that comprehensively build trust in the ability to depend and rely upon the security guarantees that TLS provides. When you use TLS today, you benefit from:
  • Free/cheap issuance from a number of different certificate authorities.
  • Regular, automated issuance/renewal via the ACME protocol.
  • Visibility into who has issued certificates for your domain and when through Certificate Transparency logs.
  • Confidence that certificates issued without certificate transparency (and therefore lacking an SCT) will not be accepted by the leading modern browsers.
  • The use of modern cryptographic protocols as a baseline, with a plausible and compelling story for how these can be steadily and promptly updated over time.
DNSSEC with DANE can match the TLS ecosystem on the first benefit (up-front price) and perhaps makes the second benefit moot, but has no ability to match any of the other transparency and accountability measures that today's TLS ecosystem offers. If your ZSK is stolen, or a parent zone is compromised or coerced, validly signed TLSA records for a forged certificate can be produced and spoofed to users under attack with minimal chance of detection. Finally, in terms of overall trust in the roots of the system, the CA/Browser Forum requirements continue to improve the accountability and transparency of TLS certificate authorities, significantly reducing the ability of any single actor (say a nefarious government) to subvert the system. The DNS root has a well-established transparent multi-party system for establishing trust in the DNSSEC root itself, but at the TLD level, almost intentionally thanks to the hierarchical nature of DNS, DNSSEC has multiple single points of control (or coercion) which exist outside of any formal system of transparency or accountability. We've moved from DANE being a potential improvement in security over TLS when it was first proposed, to being a definite regression from what TLS provides today. That's not to say that TLS is perfect, but given where we're at, we'll get a better security return from further investment and improvements in the TLS ecosystem than we will from trying to fix DNSSEC.
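To make the forged-record scenario concrete, here is a minimal sketch (my illustration, not from the original post) of what a DANE TLSA record actually contains: three parameter fields plus a digest of a certificate. The file name and domain are hypothetical; a "3 0 1" record means DANE-EE usage, full-certificate selector, SHA-256 matching, per RFC 6698.

```python
# Hypothetical sketch: derive the RDATA of a "3 0 1" TLSA record
# (DANE-EE / full certificate / SHA-256, per RFC 6698) for a
# certificate stored in DER form at cert.der.
import hashlib

with open("cert.der", "rb") as f:
    cert_der = f.read()

digest = hashlib.sha256(cert_der).hexdigest()

# An attacker who can get data validly signed into the zone can
# publish the same shape of record for their own forged certificate.
print(f"_443._tcp.example.com. IN TLSA 3 0 1 {digest}")
```

Since the record is just a hash, anything that can produce validly signed RRsets for the zone can mint a matching record for an attacker-controlled certificate, which is the attack described above.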

But TLS is not ubiquitous for non-HTTP applications The arguments above are most compelling when applied to the web-based, HTTP-oriented ecosystem which has driven most of the TLS improvements we've seen to date. Non-HTTP protocols are lagging in adoption of many of the improvements and best practices TLS has on the web. Some claim this need to provide a solution for non-HTTP, non-web applications provides a motivation to continue pushing DNSSEC deployment. I disagree; I think it provides a motivation to instead double down on moving those applications to TLS. TLS as the new TCP. The problem is that the costs of deploying and operating DNSSEC are largely fixed regardless of how many protocols you intend to protect with it, and worse, the negative side-effects of DNSSEC deployment can and will easily spill over to affect zones and protocols that don't want or need DNSSEC's protection. To justify continued DNSSEC deployment and operation in this context means using a smaller set of benefits (just for the non-HTTP applications) to justify the already high costs of deploying DNSSEC itself, plus the cost of the risk that DNSSEC poses to the reliability of your websites. I don't see how that equation can ever balance, particularly when you evaluate it against the much lower costs of just turning on TLS for the rest of your non-HTTP protocols instead of deploying DNSSEC. MTA-STS is a worked example of how this can be achieved.

If you're still not convinced, consider that even DNS itself is considering moving to TLS (via DoT and DoH) in order to add the confidentiality/privacy attributes the protocol currently lacks. I'm not a huge fan of the latency implications of these approaches, but the ongoing discussion shows that clever solutions and mitigations for that may exist. DoT/DoH solve distinct problems from DNSSEC and in principle should be used in combination with it, but in a world where DNS itself is relying on TLS, and has therefore eliminated the majority of spoofing and cache poisoning attacks through DoT/DoH deployment, the benefit side of the DNSSEC equation gets smaller and smaller still while the costs remain the same.
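As a sketch of why MTA-STS sidesteps DNSSEC (my illustration, not the post's): the policy is retrieved over HTTPS, so its integrity comes from TLS and the WebPKI rather than from signed DNS. The domain below is hypothetical, and a real client would also honour the _mta-sts TXT record's policy id and the max_age cache rules.

```python
# Hypothetical sketch of the MTA-STS (RFC 8461) policy fetch: trust
# comes from the TLS connection, not from DNSSEC. Real clients also
# check the _mta-sts TXT record and cache per max_age; "mx" may
# repeat, so a real parser keeps a list rather than a dict.
import urllib.request

domain = "example.com"
url = f"https://mta-sts.{domain}/.well-known/mta-sts.txt"

with urllib.request.urlopen(url) as resp:
    text = resp.read().decode()

policy = {}
for line in text.splitlines():
    if ":" in line:
        key, value = line.split(":", 1)
        policy[key.strip()] = value.strip()

print(policy)  # e.g. {'version': 'STSv1', 'mode': 'enforce', ...}
```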

OK, but better software or more careful operations can reduce DNSSEC's cost Some see the current DNSSEC costs simply as teething problems that will reduce as the software and tooling matures to provide more automation of the risky processes, and as operational teams learn from their mistakes or opt to simply transfer the risk by outsourcing the management and complexity to larger providers. I don't find these arguments compelling. We've already had 15+ years to develop improved software for DNSSEC without success. What's changed that we should expect a better outcome this year or next? Nothing. Even if we did have better software or outsourced operations, the approach is still only hiding the costs behind automation or transferring the risk to another organisation. That may appear to work in the short term, but eventually, when the time comes to upgrade the software, migrate between providers or change registrars, the debt will come due and incidents will occur. The problem is the complexity of the protocol itself. No amount of software improvement or outsourcing addresses that. After 15+ years of trying, I think it's worth considering that combining cryptography, caching and distributed consensus, some of the most fundamental and complex computer science problems, into a slow-moving and hard-to-evolve low-level infrastructure protocol, while appropriately balancing security, performance and reliability, appears to be beyond our collective ability. That doesn't have to be the end of the world; the improvements achieved in the TLS ecosystem over the same time frame provide a positive counter-example. Perhaps DNSSEC is simply focusing our attention at the wrong layer of the stack. Ideally, secure DNS data would be something we could have, but if the complexity of DNSSEC is the price we have to pay to achieve it, I'm out. I would rather opt to remain with the simpler yet insecure DNS protocol and compensate for its shortcomings at higher transport or application layers, where experience shows we are able to more rapidly improve and develop our security capabilities.

Summing up For the vast majority of domains and use-cases there is simply no net benefit to deploying DNSSEC in 2023. I'd even go so far as to say that if you've already signed your zones, you should (carefully) move them back to being unsigned; you'll reduce the complexity of your operating environment and lower your risk of availability loss triggered by DNS. Your users will thank you. The threats that DNSSEC defends against are already amply defended by the now mature and still improving TLS ecosystem at the application layer, and investing in further improvements there carries far more return than deployment of DNSSEC. For TLDs, like .nz whose outage triggered this post, DNSSEC is not going anywhere, and investment in mitigating its complexities and risks is an unfortunate burden that must be shouldered. While the full incident report of what went wrong with .nz is not yet available, the interim report already hints at some useful insights. It is important that InternetNZ publishes a full and comprehensive review so that the full set of learnings and improvements this incident can provide can be realised by .nz and other TLD operators stuck with the unenviable task of trying to safely operate DNSSEC.

Postscript After taking a few days to draft and edit this post, I've just stumbled across a presentation from the well-respected Geoff Huston at last week's RIPE86 meeting. I've only had time to skim the slides (video here); they don't seem to disagree with my thinking regarding the futility of the current state of DNSSEC, but they also contain some interesting ideas for what it might take for DNSSEC to become a compelling proposition. Probably worth a read/watch!

1 June 2023

Gunnar Wolf: Cheatable e-voting booths in Coahuila, Mexico, detected at the last minute

It's been a very long time since I last blogged about e-voting, although some might remember it's a topic I have long worked on; in particular, it was the topic of my 2018 Master's thesis, plus some five articles I wrote in the 2010-2018 period. After the thesis, I have to admit I got weary of the subject, and haven't pursued it anymore. So, I was saddened and dismayed to read that once again, as has already happened before, the electoral authorities have set up a pilot e-voting program in the local elections this year, which would probably lead to a wider deployment next year, in the Federal elections. This year (this week!), two States will have elections for their Governors and local Legislative branches: Coahuila (North, bordering Texas) and Mexico (Center, surrounding Mexico City). They are very different states, demographically and in their development level. Pilot programs with e-voting booths have been seen in four states TTBOMK in the last ~15 years: Jalisco (West), Mexico City, State of Mexico and Coahuila.

In Coahuila, several universities have teamed up with the Electoral Institute to develop their e-voting booth; a good thing that I can say about how this has been done in my country is that, at least, the Electoral Institute is providing its own implementations, instead of sourcing from e-booth vendors (which have their own long, tragic story, mostly in the USA, but also in other places). Not only that: they are subjecting the machines to audit processes. Not open audit processes, as demanded by academics in the field, but nevertheless external, rigorous audit processes. But still, what I and other colleagues with a Computer Security background oppose is not a specific e-voting implementation, but the adoption of e-voting in general. If for nothing else, because of the extra complexity it brings, because of the many more checks that have to be put in place, and because, as programmers, we are aware of the ease with which bugs can creep into any given implementation, both honest bugs (mistakes) and, much worse, bugs that are secretly requested and paid for.

Anyway, leave this bit aside for a while. I'm not implying there was any ill intent in the design or implementation of these e-voting booths. Two days ago, the Electoral Institute announced an important bug had been found in the Coahuila implementation. The bug, as far as I can understand from the information reported in newspapers, is that the activation codes remained active after voting, so a voter could vote multiple times. This seems like an easy problem to patch, and it most likely is. However, given the inability to patch, properly test, and deploy the fix to all of the booths in a timely manner (even though only 74 e-voting booths were to be deployed for this pilot), the whole pilot for Coahuila was scratched; Mexico State is voting with a different implementation that is not affected by this issue. This illustrates very well one of the main issues with e-voting technology: it requires a team of domain-specific experts to perform a highly specialized task (code and physical audits). I am happy and proud to say that some of the auditing experts were professors of the Information Security Masters program of ESIME Culhuacán (the Masters program I was part of). The reaction by the Electoral Institute was correct. As far as I understand, there is no evidence suggesting this bug could have been purposefully built, but it is not possible to rule that out.
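For what it is worth, the reported bug is the classic failure to make a token single-use. A toy sketch (my illustration; the audited Coahuila code is not public) of the shape of the fix, consuming the activation code at the moment the ballot is cast:

```python
# Toy sketch of single-use activation codes; hypothetical codes, not
# the audited Coahuila implementation. The reported bug class: a code
# that is not consumed on use can be replayed to vote again.
valid_codes = {"QX7-314", "ZR2-998"}  # codes issued to voters
used_codes: set[str] = set()

def cast_vote(code: str) -> bool:
    if code not in valid_codes or code in used_codes:
        return False              # unknown or already-spent code
    used_codes.add(code)          # consume *before* recording the ballot
    # ... record the anonymous ballot here ...
    return True

assert cast_vote("QX7-314")       # first use accepted
assert not cast_vote("QX7-314")   # replay rejected
```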
A traditional, paper-and-ink-based process is not only immune to attacks (or mistakes!) based on code such as this one, but can be audited by anybody. And that is, I believe, a fundamental property of democracy: ensuring the process is done right is not limited to a handful of domain experts. Not only that: in Mexico, I am sure there are hundreds of very proficient developers that could perform a code and equipment audit such as this one, but the audits are by invitation only, so being an expert is not enough to get clearance to do this. In a democracy, the whole process should be observable and verifiable by anybody interested in doing so. Some links about this news:

Russell Coker: Do Desktop Computers Make Sense?

Laptop vs Desktop Price Currently the smaller and cheaper USB-C docks start at about $25 and Dell has a new Vostro with 8G of RAM and 2*USB-C ports for $788. That gives a bit over $800 for a laptop and dock vs $795 for the cheapest Dell desktop, which also has 8G of RAM. For every way of buying laptops and desktops (EG buying from Officeworks, buying on ebay, etc) the prices for laptops and desktops seem very similar. For all those comparisons the desktop will typically have a faster CPU and more options for PCIe cards, larger storage, etc. But if you don't want to expand storage beyond the affordable 4TB NVMe/SSD devices, don't need to add PCIe cards, and don't need much CPU power, then a laptop will do well. For the vast majority of the computer work I do, my Thinkpad X1 Carbon Gen1 (from 2012) had plenty of CPU power. If someone who's not an expert in PC hardware was to buy a computer of a given age, then laptops probably aren't more expensive than desktops, even disregarding the fact that a laptop works without the need to purchase a monitor, a keyboard, or a mouse. I can get regular desktop PCs for almost nothing and get parts to upgrade them very cheaply, but most people can't do that. I can also get a decent second-hand laptop and USB-C dock for well under $400.

Servers and Gaming Systems For people doing serious programming or other compute or IO intensive tasks, some variation on the server theme is the best option. That may be something more like the servers used by the r/homelab people than the corporate servers, or it might be something in the cloud, but a server is a server. If you are going to have a home server that's a tower PC then it makes sense to put a monitor on it and use it as a workstation. If your server makes so much noise that you can't spend much time in the same room, or if it's hosted elsewhere, then using a laptop to access it makes sense. Desktop computers for PC gaming make sense as no-one seems to be making laptops with moderately powerful GPUs. The most powerful GPUs draw 150W, which is more than most laptop PSUs can supply, and even if a laptop PSU could supply that much there would be the issue of cooling. The Steam Deck [1] and the Nintendo Switch [2] can both work with USB-C docks. The PlayStation 5 [3] has a 350W PSU and doesn't support video over USB-C. The Steam Deck can do 8K resolution at 60Hz or 4K at 120Hz, but presumably the newer Steam games will need a desktop PC with a more powerful GPU to properly use such resolutions. For people who want the best FPS rates on graphics intensive games it could make sense to have a tower PC. Also a laptop that's run at high CPU/GPU use for a long time will tend to have its vents clogged by dust and possibly have the cooling fan wear out.

Monitor Resolution Laptop support for a single 4K monitor became common with the release of Intel's Ivy Bridge mobile CPUs in 2012. My own experience of setting up 4K monitors for a Linux desktop in 2019 was that it was unreasonably painful, and the soon-to-be-released Debian/Bookworm will make things work nicely for 4K monitors with KDE on X11. So laptop hardware has handled the case of a single high resolution monitor since before such monitors were cheap or common and before software supported it well. Of course at that time you had to use either a proprietary dock or a mini-DisplayPort to HDMI adaptor to get 4K working.
But that was still easier than getting PCIe video cards supporting 4K resolution, which according to spec sheets wasn't well supported by affordable cards in 2017. Since USB-C became a standard feature in laptops in about 2017, support for more monitors than most people would want through a USB-C dock became standard. My Thinkpad X1 Carbon Gen5, which was released in 2017, will support 2*FullHD monitors plus a 4K monitor via a USB-C dock; I suspect it would do at least 2*4K monitors but haven't had a chance to test. Cheap USB-C docks supporting this sort of thing have only become common in the last year or so.

How Many Computers per Home Among middle class Australians it's common to have multiple desktop PCs per household. One for each child who's over the age of about 13 and one for the parents seems to be reasonably common. Students in the later years of high school and university students are often compelled to have laptops, so having the number of laptops plus the number of desktops be larger than the population of the house probably isn't uncommon, even among people who aren't really into computers. As an aside, it's probably common among people who read my blog to have 2 desktops, a laptop, and a cloud server for their own personal use. But even among people who don't do that sort of thing, having computers outnumber people in a home is probably common. A large portion of computer users can do everything they need on a laptop. For gamers the graphics intensive games often run well on a console, and that's probably the most effective way of getting to play the games. Of course the fact that there is RGB RAM (RAM with Red, Green, and Blue LEDs to light up) along with a lot of other wild products sold to gamers suggests that gaming PCs are not about what runs the game most effectively, and that an art/craft project with the PC is more important than actually playing games. Instead of having one desktop PC per bedroom and laptops for school/university as well, it would make more sense to have a laptop per person and have a USB-C dock and monitor in each bedroom and a USB-C dock connected to a large screen TV in the lounge. This gives plenty of flexibility for moving around to do work and sharing what's on your computer with other people. It also allows taking a work computer home and using it with your monitor, having a friend bring their laptop to your home to work on something together, etc. For most people desktop computers don't make sense. While I think that convergence of phones with laptops and desktops is the way of the future [4], for most people having laptops take over all functions of desktops is the best option today.

Jamie McClelland: Enough about the AI Apocalypse Already

After watching Democracy Now's segment on artificial intelligence I started to wonder: am I out of step on this topic? When people claim artificial intelligence will surpass human intelligence and thus threaten humanity with extinction, they seem to be referring specifically to advances made with large language models. As I understand them, large language models are probability machines that have ingested massive amounts of text scraped from the Internet. They answer questions based on the probability of one series of words (their answer) following another series of words (the question). It seems like a stretch to call this intelligence, but if we accept that definition then it follows that this kind of intelligence is nothing remotely like human intelligence, which makes the claim that it will surpass human intelligence confusing. Hasn't this kind of machine learning surpassed us decades ago? Or when we say surpass, does that simply refer to fooling people into thinking an AI machine is a human via conversation? That is an important milestone, but I'm not ready to accept the Turing test as proof of equal intelligence.

Furthermore, large language models hallucinate and also reflect the biases of their training data. The word hallucinate seems like a euphemism, as if it could be corrected with the right medication, when in fact it seems hard to avoid when your strategy is to correlate words based on probability. But even if you could solve the "here is a completely wrong answer presented with sociopathic confidence" problem, reflecting the biases of your data sources seems fairly intractable. In what world would a system with built-in bias be considered on the brink of surpassing human intelligence? The danger from LLMs seems to be their ability to convince people that their answers are correct, including their patently wrong and/or biased answers.

Why do people think they are giving correct answers? Oh right, terrifying right-wing billionaires with terrifying agendas have been claiming AI will exceed human intelligence and threaten humanity, and every time they sign a hyperbolic statement they get front page mainstream coverage. And even progressive news outlets are spreading this narrative with minimal space for contrary opinions (thank you Tawana Petty from the Algorithmic Justice League for providing the only glimpse of reason in the segment). The belief that artificial intelligence is or will soon become omnipotent has real world harms today: specifically, it creates the misperception that current LLMs are accurate, which paves the way for greater adoption among police forces, social service agencies, medical facilities and other places where racial and economic biases have life and death consequences. When the CEO of OpenAI calls the technology dangerous and in need of regulation, he gets both free advertising promoting the power and supposed accuracy of his product and the possibility of freezing further developments in the field that might challenge OpenAI's current dominance.

The real threat to humanity is not AI, it's massive inequality and the use of tactics ranging from mundane bureaucracy to deadly force and incarceration to segregate the affluent from the growing number of people unable to make ends meet. We have spent decades training bureaucrats, judges and cops to robotically follow biased laws to maintain this order without compassion or empathy. Replacing them with AI would make things worse and should be stopped. But, let's be clear, the narrative that AI is poised to surpass human intelligence and make humanity extinct is a dangerous distraction that runs counter to a much more important story about "the very real and very present exploitative practices of the [companies building AI], who are rapidly centralizing power and increasing social inequities." Maybe we should talk about that instead?

31 May 2023

Russell Coker: Genesis GV60

I recently test drove a Genesis GV70, but the GV60 [1], which I didn't test drive, is a nicer car. The GV70 and GV60 are all electric, so they are quiet and perform well. The GV70 has a sun-roof that opens; it was the first car I've driven like that and I decided I don't like it. Having the shade open so I can see the sky while stuck in a traffic jam is nice though. The GV60 has a non-opening sun-roof with a shade that can be retracted; this is a feature I'd really like to have in my next car. Electric cars as a general rule have good acceleration and are quiet, and the GV70 performed as expected in that regard. It has a head-up display projected on the windscreen for the speed and the speed limit on the road in question, which is handy. When driving in a car park it showed images from all sides, which is really handy; I wish I had explored that feature more. The console is all electronic with a TFT display instead of mechanical instruments, but the only significant difference this makes in driving is that when a turn indicator is used the console display shows a video feed for the blind-spot that matches the lane change direction. This is a significant safety feature and will reduce the incidence of collisions. But the capabilities of the hardware seem under-utilised; hopefully they will release a software update at some future time to do more with it. The most significant benefit of the GV60 over the GV70 is that it has cameras instead of mirrors at the sides of the car. This reduces drag and also removes the need to adjust mirrors to match the height of the driver. Also for driver instruction the instructor and learner get to see the same view. A logical development of such cars is an expansion pack for instruction that has displays in the passenger seat to show the instructor the same instrument view as the driver sees. The minimum list driveaway price for the GV60 is $117,171.50 and for the GV70 it is $138,119.89, both of which are more than I'm prepared to pay for a car. The GV60 apparently can be started by fingerprint, which seems like a bad idea given the poor security of fingerprint sensors, but as regular car keys tend not to be too difficult to work around it probably doesn't matter. The Genesis web site makes it difficult to find the ranges of their electric cars, which is surprising. A Google search suggests that the GV60 can do 466Km and the GV70 can do 410Km, which are both reasonable numbers and nothing to be ashamed of. The GV70 was a fun car to drive and the GV60 looks like it would be even better. I recommend that everyone who likes technology take one for a test drive, but for my own use I'm looking for something that costs less than half as much.

30 May 2023

Russ Allbery: Review: The Mimicking of Known Successes

Review: The Mimicking of Known Successes, by Malka Older
Series: Mossa and Pleiti #1
Publisher: Tordotcom
Copyright: 2023
ISBN: 1-250-86051-2
Format: Kindle
Pages: 169
The Mimicking of Known Successes is a science fiction mystery novella, the first of an expected series. (The second novella is scheduled to be published in February of 2024.) Mossa is an Investigator, called in after a man disappears from the eastward platform on the 4°63' line. It's an isolated platform, five hours away from Mossa's base, and home to only four residential buildings and a pub. The most likely explanation is that the man jumped, but his behavior before he disappeared doesn't seem consistent with that theory. He was bragging about being from Valdegeld University, talking to anyone who would listen about the important work he was doing, not typically the behavior of someone who is suicidal. Valdegeld is the obvious next stop in the investigation.

Pleiti is a Classics scholar at Valdegeld. She is also Mossa's ex-girlfriend, making her both an obvious and a fraught person to ask for investigative help. Mossa is the last person she expected to be waiting for her on the railcar platform when she returns from a trip to visit her parents. The Mimicking of Known Successes is mostly a mystery, following Mossa's attempts to untangle the story of what happened to the disappeared man, but as you might have guessed there's a substantial sapphic romance subplot. It's also at least adjacent to Sherlock Holmes: Mossa is brilliant, observant, somewhat monomaniacal, and very bad at human relationships. All of this story except for the prologue is told from Pleiti's perspective as she plays a bit of a Watson role, finding Mossa unreadable, attractive, frustrating, and charming in turn. Following more recent Holmes adaptations, Mossa is portrayed as probably neurodivergent, although the story doesn't attach any specific labels.

I have no strong opinions about this novella. It was fine? There's a mystery with a few twists, there's a sapphic romance of the second-chance variety, there's a bit of action and a bit of hurt/comfort after the action, and it all felt comfortably entertaining but kind of predictable. Susan Stepney has a "passes the time" review rating, and while that may be a bit harsh, that's about where I ended up.

The most interesting part of the story is the science fiction setting. We're some indefinite period into the future. Humans have completely messed up Earth to the point of making it uninhabitable. We then took a shot at terraforming Mars and messed that planet up to the point of uninhabitability as well. Now, what's left of humanity (maybe not all of it; the story isn't clear) lives on platforms connected by rail lines high in the atmosphere of Jupiter. (Everyone in the story calls Jupiter "Giant" for reasons that I didn't follow, given that they didn't rename any of its moons.) Pleiti's position as a Classics scholar means that she studies Earth and its now-lost ecosystems, whereas the Modern faculty focus on their new platform life. This background does become relevant to the mystery, although exactly how is not clear at the start.

I wouldn't call this a very realistic setting. One has to accept that people are living on platforms attached to artificial rings around the solar system's largest planet and walk around in shirt sleeves with only minor technological support due to "atmoshields" of some unspecified capability, and where the native atmosphere plays the role of London fog. Everything feels vaguely Edwardian, including the occasional human porter and message runner, which matches the story concept but seems unlikely as a plausible future culture.
I also disbelieve in humanity's ability to do anything to Earth that would make it less habitable than the clouds of Jupiter. That said, the setting is a lot of fun, which is probably more important. It's fun to try to visualize, and it has that slightly off-balance, occasionally surprising feel of science fiction settings where everyone is recognizably human but the things they consider routine and unremarkable are unexpected by the reader. This novella also has a great title. The Mimicking of Known Successes is simultaneously a reference to a specific plot point from late in the story, a nod to the shape of the romance, and an acknowledgment of the Holmes pastiche, and all of those references work even better once you know what the plot point is. That was nicely done. This was not very memorable apart from the setting, but it was pleasant enough. I can't say that I'm inspired to pre-order the next novella in this series, but I also wouldn't object to reading it. If you're in the mood for gender-swapped Holmes in an exotic setting, you could do worse. Followed by The Imposition of Unnecessary Obstacles. Rating: 6 out of 10

29 May 2023

Jonathan Carter: MiniDebConf Germany 2023

This year I attended Debian Reunion Hamburg (aka MiniDebConf Germany) for the second time. My goal for this MiniDebConf was just to talk to people and make the most of the time I have there. No other specific plans or goals. Despite this simple goal, it was a very productive and successful event for me. Tuesday 23rd:
Wednesday 24th:
Thursday 25th:
Friday 26th:
Saturday 27th:
Sunday 28th:
Monday 29th:
Das ist nicht gut. (That is not good.)
Tuesday 30th:

Thank you to Holger for organising this event yet again!

Russell Coker: Considering Convergence

What is Convergence In 2013, Kyle Rankin (at the time a Linux Journal columnist and CSO of Purism) wrote a Linux Journal article about Linux convergence [1] (which means using a phone and a dock to replace a desktop), featuring the Nokia N900 smartphone and a chroot environment on the Motorola Droid 4 Android phone. Both of them had very limited hardware even by the standards of the day, and neither was a system I'd consider using all the time. None of the Android phones I used at that time were at all comparable to any sort of desktop system I'd want to use.

Hardware for Convergence Comparing a Phone to a Laptop The first hardware issue for convergence is docks and other accessories to attach a small computer to hardware designed for larger computers. Laptop docks have been around for decades, and for decades I haven't been using them because they have all been expensive and specific to a particular model of laptop. Having an expensive dock at home and an expensive dock at the office and then replacing them both when the laptop is replaced may work well for some people, but wasn't something I wanted to do. The USB-C interface supports data, power, and DisplayPort video over the same cable; now USB-C docks start at about $20 on eBay and dock functionality is built in to many new monitors. I can take a USB-C device to the office of any large company and know there's a good chance that there will be a USB-C dock ready for me to use. The fact that USB-C is a standard feature for phones gives obvious potential for convergence.

The next issue is performance. The Passmark benchmark seems like a reasonable way to compare CPUs [2]. It may not be the best benchmark, but it has an excellent set of published results for Intel and AMD CPUs. I ran that benchmark on my Librem5 [3] and got a result of 507 for the CPU score. At the end of 2017 I got a Thinkpad X301 [4] which rates 678 on Passmark. So the Librem5 has 3/4 the CPU power of a laptop that was OK for my use in 2018. Given that the X301 was about the minimum spec for a PC that I can use (for things other than serious compiles, running VMs, etc), the Librem5 has 3/4 the CPU power, only 3G of RAM compared to 6G, and 32G of storage compared to 64G. Here is the Passmark page for my Librem5 [5]. As an aside, my Librem5 is apparently 25% faster than the other results for the same CPU; did the Purism people do something to make their device faster than most? For me the Librem5 would be at the very low end of what I would consider a usable desktop system. A friend's N900 (like the one Kyle used) won't complete the Passmark test, apparently due to the Extended Instructions (NEON) test failing. But of the rest of the tests, most gave a result that was well below 10% of the result from the Librem5, and only the Compression and CPU Single Threaded tests managed to exceed 1/4 the speed of the Librem5. One thing to note when considering the specs of phones vs desktop systems is that the MicroSD cards designed for use in dashcams and other continuous recording devices have TBW ratings that compare well to SSDs designed for use in PCs, so swap to a MicroSD card should work reasonably well and be significantly faster than the hard disks I was using for swap in 2013! In 2013 I was using a Thinkpad T420 as my main system [6]; it had 8G of RAM (the same as my current laptop), although I noted that 4G was slow but usable at the time. Basically it seems that the Librem5 has about the sort of hardware I could have used for convergence in 2013.
But by today's standards, and with the need to drive 4K monitors etc, it's not that great. The N900 hardware specs seem very similar to the Thinkpads I was using from 1998 to about 2003. However, a device for convergence will usually do more things than a laptop (IE phone and camera functionality), and software became significantly more bloated over the 1998 to 2013 time period. A Linux desktop system performed reasonably with 32MB of RAM in 1998, but by 2013 even 2G was limiting.

Software Issues for Convergence Jeremiah Foster (Director PureOS at Purism) wrote an interesting overview of some of the software issues of convergence [7]. One of the most obvious is that the best app design for a small screen is often very different from that for a large screen. Phone apps usually have a single window that shows a view of only one part of the data that is being worked on (EG an email program that shows a list of messages or the contents of a single message, but not both). Desktop apps of any complexity will either have support for multiple windows for different data (EG two messages displayed in different windows) or a single window with multiple different types of data (EG message list and a single message). What we ideally want is for all the important apps to support changing modes when the active display is changed to one of a different size/resolution. The Purism people are doing some really good work in this regard. But it is a large project that needs to involve a huge range of apps. The next thing that needs to be addressed is the OS interface for managing apps and metadata. On a phone you swipe from one part of the screen to get a list of apps, while on a desktop you will probably have a small section of a large monitor reserved for showing a window list. On a desktop you will typically have an app to manage a list of items copied to the clipboard, while on Android and iOS there is AFAIK no standard way to do that (there is a selection of apps in the Google Play Store to do this sort of thing). Purism has a blog post by Sebastian Krzyszkowiak about some of the development of the OS to make it work better for convergence and the status of getting it into Debian [8]. The limitations in phone hardware force changes to the software. Software needs to use less memory because phone RAM can't be upgraded. The OS needs to be configured for low RAM use, which includes technologies like the zram kernel memory compression feature.

Security When mobile phones first came out they were used for less secret data. Loss of a phone was annoying and expensive, but not a security problem. Now phone theft for the purpose of gaining access to resources stored on the phone is becoming a known crime; here is a news report about a thief stealing credit cards and phones to receive the SMS notifications from banks [9]. We should expect that trend to continue. Stealing mobile devices for ssh keys, management tools for cloud services, etc is something we should expect to happen. A problem with mobile phones in current use is that they have one login used for all access, from trivial things done in low security environments (EG paying for public transport) to sensitive things done in more secure environments (EG online banking and healthcare). Some applications take extra precautions for this; EG the Android app I use for online banking requires authentication before performing any operations. The Samsung version of Android has a system called Knox for running a separate secured workspace [10].
I don't think that the Knox approach would work well for a full Linux desktop environment, but something that provides some similar features would be a really good idea. Also, running apps in containers as much as possible would be a good security feature; this is done by default in Android, and desktop OSs could benefit from it. The Linux desktop security model of logging in to a single account and getting access to everything has been outdated for a long time, probably ever since single-user Linux systems became popular. We need to change this for many reasons, and convergence just makes it more urgent.

Conclusion I have become convinced that convergence is the way of the future. It has the potential to make transporting computers easier, purchasing cheaper (buy just a phone and not desktop and laptop systems), and access to data more convenient. The Librem5 doesn't seem up to the task for my use, due to being slow and having short battery life; the PinePhone Pro has more powerful hardware and allegedly has better battery life [11], so it might work for my needs. The PinePhone Pro probably won't meet the desktop computing needs of most people, but hardware keeps getting faster and cheaper, so eventually most people could have their computing needs satisfied with a phone. The current state of software for convergence and for Linux desktop security needs some improvement. I have some experience with Linux security, so this is something I can help work on. To work on improving this I asked Linux Australia for a grant for me and a friend to get PinePhone Pro devices and a selection of accessories to go with them. Having both a Librem5 and a PinePhone Pro means that I can test software in different configurations, which will make developing software easier. Also having a friend who's working on similar things will help a lot, especially as he has some low-level hardware skills that I lack. Linux Australia awarded the grant and now the PinePhones are in transit. Hopefully I will have a PinePhone in a couple of weeks to start work on this.

Russ Allbery: Book haul

I think this is partial because I also have a stack of other books that I missed recording. At some point, I should stop using this method to track book acquisitions in favor of one of the many programs intended for this purpose, but it's in the long list of other things I really should do one of these days. As usual, I have already read and reviewed a few of these. I might be getting marginally better at reading books shortly after I acquire them? Maybe?

Steven Brust, Tsalmoth (sff)
C.L. Clark, The Faithless (sff)
Oliver Darkshire, Once Upon a Tome (non-fiction)
Hernan Diaz, Trust (mainstream)
S.B. Divya, Meru (sff)
Kate Elliott, Furious Heaven (sff)
Steven Flavall, Before We Go Live (non-fiction)
R.F. Kuang, Babel (sff)
Laurie Marks, Dancing Jack (sff)
Arkady Martine, Rose/House (sff)
Madeline Miller, Circe (sff)
Jenny Odell, Saving Time (non-fiction)
Malka Older, The Mimicking of Known Successes (sff)
Sabaa Tahir, An Ember in the Ashes (sff)
Emily Tesh, Some Desperate Glory (sff)
Valerie Valdes, Chilling Effect (sff)

26 May 2023

Valhalla's Things: Late Victorian Combinations

Posted on May 26, 2023
[Image: A woman wearing a white linen combination suit, with a very fitted top, small sleevelets that cover the armpits (to protect the next layers from sweat) and split drawers. The suit buttons up along the front (where it is a bit tight around the bust) and has a line of lace at the neckline and two tucks plus some lace at the legs.]

Some time ago, on an early Friday afternoon, our internet connection died. After a reasonable time had passed we called customer service; they told us that they would look into it and then call us back. On Friday evening we had not heard from them, and I was starting to get worried. At the time in the evening when I would have been relaxing online, I grabbed the first Victorian sewing-related book I found on my hard disk and started to read it. For the record, it wasn't actually Victorian; it was Margaret J. Blair's System of Sewing and Garment Drafting from 1904, but I also had available for comparison the earlier and smaller System of Garment Drafting from 1897 by the same author.

[Image: A page from the book showing the top part of a pattern with all construction lines.]

Anyway, this book had a system to draft a pair of combinations (chemise top + drawers); months ago I had already tried to draft a pair from another system, but they didn't really fit and they were dropped low on the priority list, so on a whim I decided to try and draft them again with this new-to-me system. Around 23:00 in the night the pattern was ready, and I realized that my SO had gone to sleep without waiting for me, as I looked too busy to be interrupted. The next few days were quite stressful (we didn't get our internet back until Wednesday), and while I couldn't work at my day job I didn't sew as much as I could have done, but by the end of the week I had an almost complete mockup from an old sheet, and could see that it wasn't great, but it was a good start. One reason why the mockup took a whole week is that of course I started to sew by machine, but then I wanted flat-felled seams, and felling them by hand is so much neater, isn't it? And let me just say, I'm grateful for the fact that I don't depend on streaming services for media, but have a healthy mix of DVDs and stuff I had already temporarily downloaded to watch later, because handsewing while stressed out and not watching something is not really great.

Anyway, the mockup was a bit short in the crotch, but by the time I could try it on and be sure, I was invested enough in it that I decided to work around the issue by inserting a strip of lace around the waist. And then I went back to the pattern to fix it properly, and found out that I had drafted the back of the drawers completely wrong, making a seam shorter rather than longer as it should have been. Ooops. I fixed the pattern, and then decided that YOLO and cut the new version directly into some lightweight linen fabric I had originally planned to use for this project. The result is still not perfect, but good enough, and I finished it with a very restrained amount of lace at the neckline and hems, wore it one day when the weather was warm (loved the linen on the skin), and it's ready to be worn again when the weather is back to being warm (hopefully not too soon). The last problem was taking pictures of this underwear in a way that preserves decency (and it even had to be outdoors, for the light!).
This was solved by wearing leggings and a matched long-sleeved shirt under the combinations, and then promptly forgetting everything about decency and, well, you can see what happened. A woman mooning by keeping the back of split drawers open with her hands, but at least there are black leggings under them.

The pattern is, as usual, published on my pattern website as #FreeSoftWear.

And then, I started thinking about knits. In the late Victorian and Edwardian eras knit underwear was a thing, also thanks to the influence of various aspects of the rational dress movement; reformers such as Gustav Jäger advocated for wool underwear, but mail order catalogues from the era such as https://archive.org/details/cataloguefallwin00macy (starting from page 67) have listings for both cotton and wool ones. From what I could find, back then they would have been either handknit at home or made to shape on industrial knitting machines; patterns for the former are available online, but the latter would probably require a knitting machine that I don't currently¹ have. However, this is underwear that is not going to be seen by anybody², and I believe that by using flat knit fabric one can get a decent functional approximation. In The Stash I have a few meters of a worked cotton jersey with a pretty comfy feel, and to make a long story short: this happened. A woman wearing a black cotton jersey combination suit; the front is sewn shut, but the neck is wide and finished with elastic. The top part is pretty fitted, but becomes baggier around the crotch area and the legs are a comfortable width.

I suspect that the linen one will get worn a lot this summer (linen on the skin: nothing else needs to be said), while the cotton one will be stored away for winter. And then maybe I may make a couple more, if I find out that I'm using them enough.

  1. cue ominous music. But first I would need space to actually keep and use it :)
  2. other than me, my SO, any costuming friend I may happen to change in the presence of, and everybody on the internet in these pictures.

23 May 2023

Craig Small: Devices with cgroup v2

Docker and other container systems by default restrict access to devices on the host. They used to do this with the device controller in cgroup v1; however, the second version of cgroups removed this controller, and the man page says:
Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF.
https://www.kernel.org/doc/Documentation/admin-guide/cgroup-v2.rst
That is just awesome: nothing to see here, go look at the BPF documents if you have cgroup v2. With cgroup v1, if you wanted to know what devices were permitted, you would just cat /sys/fs/cgroup/XX/devices.allow and you were done! The kernel documentation is not very helpful: sure, it's something in BPF, and has something to do with the cgroup BPF specifically, but what does that mean? There doesn't seem to be an easy corresponding method to get the same information. So to see what restrictions a docker container has, we will have to:
  1. Find what cgroup the programs running in the container belong to
  2. Find the eBPF program ID that is attached to our container's cgroup
  3. Dump the eBPF program to a text file
  4. Try to interpret the eBPF syntax
The last step is by far the most difficult.

Finding a container's cgroup All containers have a short ID and a long ID. When you run the docker ps command, you get the short ID. To get the long ID you can either use the --no-trunc flag or just guess from the short ID. I usually do the latter.
$ docker ps 
CONTAINER ID   IMAGE            COMMAND       CREATED          STATUS          PORTS     NAMES
a3c53d8aaec2   debian:minicom   "/bin/bash"   19 minutes ago   Up 19 minutes             inspiring_shannon
So the short ID is a3c53d8aaec2 and the long ID is a big ugly hex string starting with that. I generally just paste the relevant part in the next step and hit tab. For this container the cgroup is /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/ Notice that the last directory has docker- then the short ID. If you're not sure of the exact path: /sys/fs/cgroup is the cgroup v2 mount point (which can be found with mount -t cgroup2) and the rest is the actual cgroup name. If you know a process running in the container, then the cgroup column in ps will show you.
$ ps -o pid,comm,cgroup 140064
    PID COMMAND         CGROUP
 140064 bash            0::/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope
Either way, you will have your cgroup path.
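If you are not sure where cgroup v2 is mounted on your system, you can check directly; typical output looks something like this (the mount options will vary between systems):
$ mount -t cgroup2
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)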

eBPF programs and cgroups Next we will need to get the eBPF program ID that is attached to our recently found cgroup. To do this, we will need to use the bpftool. One thing that threw me for a long time is that when the tool talks about a program or a PROG ID, it is talking about eBPF programs, not your processes! With that out of the way, let's find the prog ID.
$ sudo bpftool cgroup list /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/
ID       AttachType      AttachFlags     Name
90       cgroup_device   multi
Our cgroup is attached to the eBPF program with ID 90, and the type of program is cgroup_device.

Dumping the eBPF program Next, we need to get the actual code that is run every time a process in the cgroup tries to access a device. The program takes some parameters and returns either a 1 for "yes, you are allowed" or a zero for "permission denied". Don't use the file option, as it dumps the program in binary format; the text version is hard enough to understand.
sudo bpftool prog dump xlated id 90 > myebpf.txt
Congratulations! You now have the eBPF program in a human-readable (?) format.

Interpreting the eBPF program The eBPF format as dumped is not exactly user friendly. It probably helps to first go and look at an example program to see what is going on. You'll see that the program splits the type (lower 16 bits) and access (upper 16 bits) out of a single value and then does comparisons on those values. The eBPF does something similar:
   0: (61) r2 = *(u32 *)(r1 +0)
   1: (54) w2 &= 65535
   2: (61) r3 = *(u32 *)(r1 +0)
   3: (74) w3 >>= 16
   4: (61) r4 = *(u32 *)(r1 +4)
   5: (61) r5 = *(u32 *)(r1 +8)
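These first six instructions unpack the context that the kernel passes to cgroup device programs. For reference, that context is struct bpf_cgroup_dev_ctx from the kernel's UAPI headers (include/uapi/linux/bpf.h), essentially:

   struct bpf_cgroup_dev_ctx {
           __u32 access_type; /* device type in the lower 16 bits, access bits in the upper 16 */
           __u32 major;       /* device major number */
           __u32 minor;       /* device minor number */
   };

So r1 +0 is access_type, r1 +4 is the major number, and r1 +8 is the minor number.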
What we find is that, once we get past the first few lines that filter the given value, the comparisons use:
  • r2 is the device type: 1 is block, 2 is character.
  • r3 is the device access; it's compared (via the scratch register r1) after masking the relevant bits. mknod, read and write are bits 1, 2 and 4 respectively.
  • r4 is the major number
  • r5 is the minor number
Even for a pretty simple setup, you are going to have around 60 lines of eBPF code to look at. Luckily, the lines for the command options you added will often be near the end, which makes it easier. For example:
  63: (55) if r2 != 0x2 goto pc+4
  64: (55) if r4 != 0x64 goto pc+3
  65: (55) if r5 != 0x2a goto pc+2
  66: (b4) w0 = 1
  67: (95) exit
This is a container using the option --device-cgroup-rule='c 100:42 rwm'. It is checking if r2 (device type) is 2 (char), r4 (major device number) is 0x64 or 100, and r5 (minor device number) is 0x2a or 42. If any of those are not true, it moves to the next section; otherwise it returns 1 (permit). We have all access modes permitted, so it doesn't check for the access mode at all. The previous example has all permissions for our device with ID 100:42; what if we only want read access, with the option --device-cgroup-rule='c 100:42 r'? The resulting eBPF is:
  63: (55) if r2 != 0x2 goto pc+7  
  64: (bc) w1 = w3
  65: (54) w1 &= 2
  66: (5d) if r1 != r3 goto pc+4
  67: (55) if r4 != 0x64 goto pc+3
  68: (55) if r5 != 0x2a goto pc+2
  69: (b4) w0 = 1
  70: (95) exit
The code is almost the same, but we are checking that w3 has no bits other than the second bit (the read bit) set, effectively checking for X == (X & 2). It's a cautious approach, meaning that no access at all still passes but multiple bits set will fail.
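As a readability aid (this is my own paraphrase in C, with a made-up function name, not something generated from the dump), the read-only rule above is roughly equivalent to:

   /* Sketch of the eBPF rule for --device-cgroup-rule='c 100:42 r' */
   int rule_c_100_42_r(unsigned int type, unsigned int access,
                       unsigned int major, unsigned int minor)
   {
           if (type == 2 &&                /* character device */
               (access & 2) == access &&   /* no bits beyond the read bit */
               major == 0x64 &&            /* 100 */
               minor == 0x2a)              /* 42 */
                   return 1;               /* permit */
           return 0;                       /* fall through to the next rule */
   }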

The device option docker run allows you to specify files you want to grant access to for your containers with the --device flag. This flag actually does two things. The first is to create the device file in the container's /dev directory, effectively doing a mknod command. The second is to adjust the eBPF program. If the device file we specified actually did have a major number of 100 and a minor of 42, the eBPF would look exactly like the above snippets.
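For example (the device path here is hypothetical; substitute one that actually exists on your host), both effects come from a single flag:
$ docker run -it --device /dev/ttyUSB0 debian:minicom /bin/bash
This creates /dev/ttyUSB0 inside the container and adds a matching allow rule to the container's eBPF program.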

What about privileged? So far we have used the direct cgroup options; what does the --privileged flag do? This lets the container have full access to all the devices (if the user running the process is allowed). Like the --device flag, it makes the device files as well, but what does the filtering look like? We still have a cgroup, but the eBPF program is greatly simplified; here it is in full:
   0: (61) r2 = *(u32 *)(r1 +0)
   1: (54) w2 &= 65535
   2: (61) r3 = *(u32 *)(r1 +0)
   3: (74) w3 >>= 16
   4: (61) r4 = *(u32 *)(r1 +4)
   5: (61) r5 = *(u32 *)(r1 +8)
   6: (b4) w0 = 1
   7: (95) exit
There are the usual setup lines and then: return 1. Everyone is a winner, for all devices and access types!

Jonathan Dowland: neovim plugins and distributions

I've been watching the neovim community for a while and what seems like a Cambrian explosion of plugins emerging. A few weeks back I decided to spend most of a "day of learning" on investigating some of the plugins and technologies that I'd read about: Language Server Protocol, TreeSitter, neorg (a grandiose organiser plugin), etc.

It didn't go so well. I spent most of my time fighting version incompatibilities or tracing through scant documentation or code to figure out which plugin was incompatible with which other. There's definitely a line where crossing it is spending too much time playing with your tools instead of creating. On the other hand, there's definitely value in honing your tools and learning about new technologies. Everyone's line is probably in a different place. I've come to the conclusion that I don't have the time or inclination (or both) to approach exploring the neovim universe in this way. There exist a number of plugin "distributions" (such as LunarVim): collections of pre-configured and integrated plugins that you can try to use out-of-the-box. Next time I think I'll pick one up and give that a try, independently from my existing configuration, and see which ideas from it I might like to adopt.

shared vimrcs Some folks upload their vim or neovim configurations in their entirety for others to see. I noticed Jess Frazelle had published hers so I took a look. I suppose one could evaluate a bunch of plugins and configuration in isolation using a shared vimrc like this, in the same way as a distribution.

bufferline Amongst the plugins she uses was bufferline, a plugin to re-work neovim's tab bar to behave like tab bars from more conventional editors¹. I don't make use of neovim's tabs at all², so I would lose nothing having the (presently hidden) tab bar reworked, so I thought I'd give it a go. I had to disable an existing plugin, lightline, which I've had enabled for years but wasn't sure I was getting much value from. Apparently it also messes with the tab bar! Disabling it, at least for now, at least means I'll find out if I miss it. I am already using vim-buffergator as a means of seeing and managing open buffers: a hotkey opens a sidebar with a list of open buffers, to switch between or close. Bufferline gives me a more immediate, always-present view of open buffers, which is faintly useful: but not much. Perhaps I'd like it more if I was coming from an editor that had made it more of an expected feature. Two things I noticed about it aren't especially useful for me: when browsing around vimwiki pages, I quickly open a lot of buffers, and the horizontal line fills up very quickly. Even when I don't, I habitually have quite a lot of buffers open, and the horizontal line is quickly overwhelmed. I have found myself closing open buffers with the mouse, which I didn't do before.

vert Since I have brought up a neovim UI feature (tabs) I thought I'd briefly mention my new favourite neovim built-in command: vert. Quite a few plugins and commands open up a new window (e.g. git-fugitive, Man, etc.) and they typically do so in a horizontal split. I'm increasingly preferring vertical splits. Prefixing any³ such command with vert forces the split to be vertical instead.
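A minimal example, using the built-in Man command (which normally opens its window in a horizontal split):
:vert Man 3 printf
This shows the printf(3) manual page in a vertical split instead.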

  1. in this case the direct influence was apparently DOOM Emacs
  2. (neo)vim's notion of tabs is completely different to what you might expect from other UI models.
  3. at least, I haven't found one that doesn't work yet

Russ Allbery: Review: A Half-Built Garden

Review: A Half-Built Garden, by Ruthanna Emrys
Publisher: Tordotcom
Copyright: 2022
ISBN: 1-250-21097-6
Format: Kindle
Pages: 340
The climate apocalypse has happened. Humans woke up to the danger, but a little bit too late. Over one billion people died. But the world on the other side of that apocalypse is not entirely grim. The corporations responsible for so much of the damage have been pushed out of society and isolated on their independent "aislands," traded with only grudgingly for the few commodities the rest of the world has not yet learned how to manufacture without them. Traditional governments have largely collapsed, although they cling to increasingly irrelevant trappings of power. In their place arose the watershed networks: a new way of living with both nature and other humans, built around a mix of anarchic consensus and direct democracy, with conservation and stewardship of the natural environment at its core.

Therefore, when the aliens arrive near Bear Island on the Potomac River, they're not detected by powerful telescopes and met by military jets. Instead, their waste sets off water sensors, and they're met by the two women on call for alert duty, carrying a nursing infant and backed by the real-time discussion and consensus technology of the watershed's dandelion network. (Emrys is far from the first person to name something a "dandelion network," so be aware that the usage in this book seems unrelated to the charities or blockchain network.)

This is a first contact novel, but it's one that skips over the typical focus of the subgenre. The alien Ringers are completely fluent in English down to subtle nuance of emotion and connotation (supposedly due to observation of our radio and TV signals), have translation devices, and in some cases can make our speech sounds directly. Despite significantly different body shapes, they are immediately comprehensible; differences are limited mostly to family structure, reproduction, and social norms. This is Star Trek first contact, not the type more typical of written science fiction. That feels unrealistic, but it's also obviously an authorial choice to jump directly to the part of the story that Emrys wants to write.

The Ringers have come to save humanity. In their experience, technological civilization is inherently incompatible with planets. Technology will destroy the planet, and the planet will in turn destroy the species unless they can escape. They have reached other worlds multiple times before, only to discover that they were too late and everyone is already dead. This is the first time they've arrived in time, and they're eager to help humanity off its dying planet to join them in the Dyson sphere of space habitats they are constructing. Planets, to them, are a nest and a launching pad, something to eventually abandon and break down for spare parts.

The small, unexpected wrinkle is that Judy, Carol, and the rest of their watershed network are not interested in leaving Earth. They've finally figured out the most critical pieces of environmental balance. Earth is going to get hotter for a while, but the trend is slowing. What they're doing is working. Humanity would benefit greatly from Ringer technology and the expertise that comes from managing closed habitat ecosystems, but they don't need rescuing. This goes over about as well as a toddler saying that playing in the road is perfectly safe. This is a fantastic hook for a science fiction novel.
It does exactly what a great science fiction premise should do: takes current concerns (environmentalism, space boosterism, the debatable primacy of humans as a species, the appropriate role of space colonization, the tension between hopefulness and doomcasting about climate change) and uses the freedom of science fiction to twist them around and come at them from an entirely different angle.

The design of the aliens is excellent for this purpose. The Ringers are not one alien species; they are two, evolved on different planets in the same system. The plains dwellers developed space flight first and went to meet the tree dwellers, and while their relationship is not entirely without hierarchy (the plains dwellers clearly lead on most matters), it's extensively symbiotic. They now form mixed families of both species, and have a rich cultural history of stories about first contact, interspecies conflicts and cooperation, and all the perils and misunderstandings that they successfully navigated. It makes their approach to humanity more believable to know that they have done first contact before and are building on a model. Their concern for humanity is credibly sincere. The joining of two species was wildly successful for them and they truly want to add a third.

The politics on the human side are satisfyingly complicated. The watershed network may have made first contact, but the US government (in the form of NASA) is close behind, attempting to lean on its widely ignored formal power. The corporations are farther away and therefore slower to arrive, but the alien visitors have a damaged ship and need space to construct a subspace beacon, and Asterion is happy to offer a site on one of its New Zealand islands. The corporate representatives are salivating at the chance to escape Earth and its environmental regulation for uncontrolled space construction and a new market of trillions of Ringers. NASA's attitude is more measured, but their representative is easily persuaded that the true future of humanity is in space. The work the watershed networks are doing is difficult, uncertain, and involves a lot of sacrifice, particularly of corporate consumer lifestyles. With such an attractive alien offer on the table, why stay and work so hard for an uncertain future? Maybe the Ringers are right.

And then the dandelion networks that the watersheds use as the core of their governance and decision-making system all crash.

The setup was great; I was completely invested. The execution was more mixed. There are some things I really liked, some things that I thought were a bit too easy or predictable, and several places where I wish Emrys had dug deeper and provided more detail. I thought the last third of the book fizzled a little, although some of the secondary characters Emrys introduces are delightful and carry the momentum of the story when the politics feel a bit lacking.

If you tried to form a mental image of ecofeminist political science fiction with 1970s utopian sensibilities, but updated for the concerns of the 2020s, you would probably come very close to the politics of the watershed networks. There are considerably more breastfeedings and diaper changes than in the average SF novel. Two of the primary characters are transgender, but with very different experiences with transition. Pronoun pins are a ubiquitous article of clothing. One of the characters has a prosthetic limb. Another character who becomes important later in the story codes as autistic.
None of this felt gratuitous; the characters do come across as obsessed with gender, but in a way that I found believable. The human diversity is well-integrated with the story, shapes the characters, creates practical challenges, and has subtle (and sometimes not so subtle) political ramifications. But, and I say this with love because while these are not quite my people they're closely adjacent to my people, the social politics of this book are a very specific type of white feminist collaborative utopianism. When religion makes an appearance, I was completely unsurprised to find that several of the characters are Jewish. Race never makes a significant appearance at all. It's the sort of book where the throw-away references to other important watershed networks include African ones, and the characters would doubtless try to be sensitive to racial issues if they came up, but somehow they never do. (If you're wondering if there's polyamory in this book, yes, yes there is, and also I suspect you know exactly what culture I'm talking about.)

This is not intended as a criticism, just more of a calibration. All science fiction publishing houses could focus only on this specific political perspective for a year and the results would still be dwarfed by the towering accumulated pile of thoughtless paeans to capitalism. Ecofeminism has a long history in the genre but still doesn't show up in that many books, and we're far from exhausting the space of possibilities for what a consensus-based politics could look like with extensive computer support. But this book has a highly specific point of view, enough so that there won't be many thought-provoking surprises if you're already familiar with this school of political thought.

The politics are also very earnest in a way that I admit provoked a bit of eyerolling. Emrys pushes all of the political conflict into the contrasts between the human factions, but I would have liked more internal disagreement within the watershed networks over principles rather than tactics. The degree of ideological agreement within the watershed group felt a bit unrealistic. But, that said, at least politics truly matters and the characters wrestle directly with some tricky questions. I would have liked to see more specifics about the dandelion network and the exact mechanics of the consensus decision process, since that sort of thing is my jam, but we at least get more details than are typical in science fiction. I'll take this over cynical libertarianism any day.

Gender plays a huge role in this story, enough so that you should avoid this book if you're not interested in exploring gender conceptions. One of the two alien races is matriarchal and places immense social value on motherhood, and it's culturally expected to bring your children with you for any important negotiation. The watersheds actively embrace this, or at worst find it comfortable to use for their advantage, despite a few hints that the matriarchy of the plains aliens may have a very serious long-term demographic problem.

In an interesting twist, it's the mostly-evil corporations that truly challenge gender roles, albeit by turning it into an opportunity to sell more clothing. The Asterion corporate representatives are, as expected, mostly the villains of the plot: flashy, hierarchical, consumerist, greedy, and exploitative. But gender among the corporations is purely a matter of public performance, one of a set of roles that you can put on and off as you choose and signal with clothing.
They mostly use neopronouns, change pronouns as frequently as their clothing, and treat any question of body plumbing as intensely private. By comparison, the very 2020 attitudes of the watersheds towards gender felt oddly conservative and essentialist, and the main characters get flustered and annoyed by the ever-fluid corporate gender presentation. I wish Emrys had done more with this.

As you can tell, I have a lot of thoughts and a lot of quibbles. Another example: computer security plays an important role in the plot and was sufficiently well-described that I have serious questions about the system architecture and security model of the dandelion networks. But, as with decision-making and gender, the more important takeaway is that Emrys takes enough risks and describes enough interesting ideas that there's a lot of meat here to argue with. That, more than getting everything right, is what a good science fiction novel should do.

A Half-Built Garden is written from a very specific political stance that may make it a bit predictable or off-putting, and I thought the tail end of the book had some plot and resolution problems, but arguing with it was one of the more intellectually satisfying science fiction reading experiences I've had recently. You have to be in the right mood, but recommended for when you are.

Rating: 7 out of 10

22 May 2023

Russ Allbery: Review: Tsalmoth

Review: Tsalmoth, by Steven Brust
Series: Vlad Taltos #16
Publisher: Tor
Copyright: 2023
ISBN: 1-4668-8970-5
Format: Kindle
Pages: 277
Tsalmoth is the sixteenth book in the Vlad Taltos series and (some fans of the series groan) yet another flashback novel to earlier in Vlad's life. It takes place between Yendi and the interludes in Dragon (or, perhaps more straightforwardly, between Yendi and Jhereg). Most of the books of this series stand alone to some extent, so you could read this book out of order and probably not be horribly confused, but I suspect it would also feel weirdly pointless outside of the context of the larger series.

We're back to Vlad running a fairly small operation as a Jhereg, who are the Dragaeran version of organized crime. A Tsalmoth who owes Vlad eight hundred imperials has rudely gotten himself murdered, thoroughly enough that he can't be revived. That's a considerable amount of money, and Vlad would like it back, so he starts poking around. As you might expect if you've read any other book in this series, things then get a bit complicated. This time, they involve Jhereg politics, Tsalmoth house politics, and necromancy (which in this universe is more about dimensional travel than it is about resurrecting the dead).

The main story is... fine. Kragar is around being unnoticeable as always, Vlad is being cocky and stubborn and bantering with everyone, and what appears to be a straightforward illegal business relationship turns out to involve Dragaeran magic and thus Vlad's highly-placed friends. As usual, they're intellectually curious about the magic and largely ambivalent to the rest of Vlad's endeavors. The most enjoyable part of the story is Vlad's insistence on getting his money back while everyone else in the story cannot believe he would be this persistent over eight hundred imperials and is certain he has some other motive. It's otherwise a fairly forgettable little adventure.

The implications for the broader series, though, are significant, although essentially none of the payoff is here. Brust has been keeping a major secret about Vlad that's finally revealed here, one that has little impact on the plot of this book (although it causes Vlad a lot of angst) but which I suspect will become very important later in the series. That was intriguing but rather unsatisfying, since it stays only a future hook with an attached justification for why we're only finding out about it now.

If one has read the rest of the series, it's also nice to see Vlad and Cawti working together, bantering with each other and playing off of each other's strengths. It's reminiscent of the best parts of Yendi. As with many of the books of this series, the chapter introductions tell a parallel story; this time, it's Vlad and Cawti's wedding. I think previous books already mentioned that Vlad is narrating this series into some sort of recording device, and a bit about why he's doing that, but this is made quite explicit here. We get as much of the surrounding frame as we've ever seen before. There are no obvious plot consequences from this (it's still all hints and guesswork), but I suspect this will also become important by the end of the series.

If you've read this much of the series, you'll obviously want to read this one as well, but unfortunately don't get your hopes up for significant plot advancement. This is another station-keeping book, which is a bit of a disappointment. We haven't gotten major plot advancement since Hawk in 2014, and I'm getting impatient.
Thankfully, Lyorn has a release date already (April 9, 2024), and assuming all goes according to the grand plan, there are only two books left after Lyorn (Chreotha and The Last Contract). I'm getting hopeful that we're going to get to see the entire series. Meanwhile, I am very tempted to do a complete re-read of the series to date, probably in series chronological order rather than in publication order (as much as that's possible given the fractured timelines of Dragon and Tiassa) so that I can see how the pieces fit together. The constant jumping back and forth, and the allusions to events that have already happened but that we haven't seen yet, are hard to keep track of. I'm very glad the Lyorn Records exists.

Followed by Lyorn.

Rating: 7 out of 10

21 May 2023

Russ Allbery: Review: The Stone Canal

Review: The Stone Canal, by Ken MacLeod
Series: Fall Revolution #2
Publisher: Tor
Copyright: 1996
Printing: January 2001
ISBN: 0-8125-6864-8
Format: Mass market
Pages: 339
The Stone Canal is a sort of halfway sequel to The Star Fraction. They both take place in the same universe, but the characters are almost entirely disjoint. Half of The Stone Canal happens (mostly) well before the previous book and the other half happens well after it. This book does contain spoilers for the ending of The Star Fraction if one connects the events of the two books correctly (which was a bit harder than I thought it should be), so I would not read them out of order.

At the start of The Stone Canal, Jon Wilde wakes up on New Mars beside the titular canal, in the middle of nowhere, accompanied only by a robot that says it made him. Wilde remembers dying on Earth; this new life is apparently some type of resurrection. It's a long walk to Ship City, the center of civilization of a place the robot tells him is New Mars.

In Ship City, an android named Dee Model has escaped from her owner and is hiding in a bar. There, she meets an AI abolitionist named Tamara, who helps her flee out the back and down the canal on a boat when Wilde walks into the bar and immediately recognizes her. The abolitionists provide her protection and legal assistance to argue her case for freedom from her owner, a man named Reid.

The third thread of the story, and about half the book, is Jon Wilde's life on Earth, starting in 1975 and leading up to the chaotic wars, political fracturing, and revolutions that formed the background and plot of The Star Fraction. Eventually that story turns into a full-fledged science fiction setting, but not until the last 60 pages of the book.

I successfully read two books in a Ken MacLeod series! Sadly, I'm not sure I enjoyed the experience.

I commented in my review of The Star Fraction that the appeal for me in MacLeod's writing was his reputation as a writer of political science fiction. Unfortunately that's been a bust. The characters are certainly political, in the sense that they profess to have strong political viewpoints and are usually members of some radical (often Trotskyite) organization. There are libertarian anarchist societies and lots of political conflict. But there is almost no meaningful political discussion in any of these books so far. The politics are all tactical or background, and often seem to be created by authorial fiat.

For example, New Mars is a sort of libertarian anarchy that somehow doesn't have corporations or a strongman ruler, even though the history (when we finally learn it) would have naturally given rise to one or the other (and has, in numerous other SF novels with similar plots). There's a half-assed explanation for this towards the end of the book that I didn't find remotely believable. Another part of the book describes the formation of the libertarian microstate in The Star Fraction, but never answers a "why" or "how" question I had in the previous book in a satisfying way. Somehow people stop caring about control or predictability or stability or traditional hierarchy without any significant difficulties except external threats, in situations of chaos and disorder where historically humans turn to anyone promising firm structure.

It's common to joke about MacLeod winning multiple libertarian Prometheus Awards for his fiction despite being a Scottish communist. I'm finding that much less surprising now that I've read more of his books. Whether or not he believes in it himself, he's got the cynical libertarian smugness and hand-waving down pat. What his characters do care deeply about is smoking, drinking, and having casual sex.
(There's more political fire here around opposition to anti-smoking laws than there is about any of the society-changing political structures that somehow fall into place.) I have no objections to any of those activities from a moral standpoint, but reading about other people doing them is a snoozefest. The flashback scenes sketch out enough imagined history to satisfy some curiosity from the previous book, but they're mostly about the world's least interesting love triangle, involving two completely unlikable men and lots of tedious jealousy and posturing.

The characters in The Stone Canal are, in general, a problem. One of those unlikable men is Wilde, the protagonist for most of the book. Not only did I never warm to him, I never figured out what motivates him or what he cares about. He's a supposedly highly political person who seems to engage in politics with all the enthusiasm of someone filling out tax forms, and is entirely uninterested in explaining to the reader any sort of coherent philosophical approach. The most interesting characters in this book are the women (Annette, Dee Model, Tamara, and, very late in the book, Meg), but other than Dee Model they rarely get much focus from the story.

By far the best part of this book is the last 60 pages, where MacLeod finally explains the critical bridge events between Wilde's political history on Earth and the New Mars society. I thought this was engrossing, fast-moving, and full of interesting ideas (at least for a 1990s book; many of them feel a bit stale now, 25 years later). It was also frustrating, because this was the book I wanted to have been reading for the previous 270 pages, instead of MacLeod playing coy with his invented history or showing us interminable scenes about Wilde's insecure jealousy over his wife.

It's also the sort of book where at one point characters (apparently uniformly male, as far as one could tell from the text) get assigned sex slaves, and while MacLeod clearly doesn't approve of this, the plot is reminiscent of a Heinlein novel: the protagonist's sex slave becomes a very loyal permanent female companion who seems to have the same upside for the male character in question.

This was unfortunately not the book I was hoping for. I did enjoy the last hundred pages, and it's somewhat satisfying to have the history come together after puzzling over what happened for 200 pages. But I found the characters tedious and annoying, and the politics weirdly devoid of anything like sociology, philosophy, or political science. There is the core of a decent 1990s AI and singularity novel here, but the technology is now rather dated and a lot of other people have tackled the same idea with fewer irritating tics. Not recommended, although I'll probably continue to The Cassini Division because the ending was a pretty great hook for another book.

Followed by The Cassini Division.

Rating: 5 out of 10

18 May 2023

Antoine Beaupré: A terrible Pixel Tablet

In a strange twist of history, Google finally woke up and thought "I know what we need to do! We need to make a TABLET!". So some time soon in 2023, Google will release "The tablet that only Google could make", the Pixel Tablet. Having owned a Samsung Galaxy Tab S5e for a few years, I was very curious to see how this would pan out, and especially whether it would be easier to flash than the Samsung. As an aside, I figured I would give that a shot, and within a few days managed to completely brick the device. Awesome. See gts4lvwifi for the painful details of that. In any case, Google made a tablet. I own a Pixel phone and I'm moderately happy with it. It's easy to flash with CalyxOS; maybe this is the promised land of tablets?

Compared with the Samsung But it turns out that the Pixel Tablet pales in comparison with the Samsung tablet, produced 4 years ago, in 2019:
  • it's thicker (8.1mm vs 5.5mm)
  • it's heavier (493g vs 400g)
  • it's not AMOLED (IPS LCD)
  • it doesn't have an SD card reader
  • its camera is worse (8MP vs 13MP, 1080p video instead of 4k)
  • it's more expensive (670EUR vs 410EUR)
What the Pixel tablet has going for it:
  • a slightly more powerful CPU
  • a stylus
  • more storage (128GB or 256GB vs 64GB or 128GB)
  • more RAM (8GB vs 4GB or 6GB)
  • Wifi 6
I guess I should probably wait for the actual device to come out to see reviews and how it stacks up, but so far it's kind of impressive how underwhelming this is. Also note that we're comparing against a very old Samsung tablet here, a fairer comparison might be against the Samsung Galaxy Tab S8. There the sizes are comparable, and the Samsung is more expensive than the Pixel, but then the Pixel has absolutely zero advantages and all the other disadvantages.

The Dock The "Dock" is also worth a little aside. See, the tablet comes with a dock that doubles as a speaker. You can't buy the tablet without the dock. You have to have a dock. I shit you not, actual quote: "Can I purchase a Pixel Tablet without the Charging Speaker Dock? No, you can only purchase the Pixel Tablet with the Charging Speaker Dock." In case you really, really like the dock, "You may purchase additional Charging Speaker Docks separately (coming soon)." And no, they can't all play together; only the dock the tablet is docked into will play audio. The dock is not a Bluetooth speaker: it can only play audio from that one tablet that Google made, this one time. It's also not a battery pack. It's just a charger with speakers in it. Promising e-waste. Again, I hope I'm wrong and that this is going to be a fine tablet. But so far, it looks like it doesn't even come close to whatever Samsung threw over the fence before the apocalypse (remember 2019? were we even born yet?). "The tablet that only Google could make." Amazing. Hopefully no one else gets any bright ideas like this.

16 May 2023

Freexian Collaborators: Monthly report about Debian Long Term Support, April 2023 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In April, 18 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 6.0h (out of 0h assigned and 14.0h from previous period), thus carrying over 8.0h to the next month.
  • Adrian Bunk did 18.0h (out of 16.5h assigned and 24.0h from previous period), thus carrying over 22.5h to the next month.
  • Anton Gladky did 8.0h (out of 9.5h assigned and 5.5h from previous period), thus carrying over 7.0h to the next month.
  • Bastien Roucariès did 17.0h (out of 17.0h assigned and 3.0h from previous period), thus carrying over 3.0h to the next month.
  • Ben Hutchings did 16.0h (out of 12.0h assigned and 12.0h from previous period), thus carrying over 8.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Dominik George did 0.0h (out of 0h assigned and 20.34h from previous period), thus carrying over 20.34h to the next month.
  • Emilio Pozuelo Monfort did 4.5h (out of 11.0h assigned and 9.5h from previous period), thus carrying over 16.0h to the next month.
  • Guilhem Moulin did 8.5h (out of 8.0h assigned and 12.0h from previous period), thus carrying over 11.5h to the next month.
  • Helmut Grohne did 5.0h (out of 2.5h assigned and 7.5h from previous period), thus carrying over 5.0h to the next month.
  • Lee Garrett did 0.0h (out of 31.5h assigned and 9.0h from previous period), thus carrying over 40.5h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 12.5h (out of 0h assigned and 24.0h from previous period), thus carrying over 11.5h to the next month.
  • Roberto C. Sánchez did 8.5h (out of 4.75h assigned and 15.25h from previous period), thus carrying over 11.5h to the next month.
  • Stefano Rivera did 1.0h (out of 0h assigned and 28.0h from previous period), thus carrying over 27.0h to the next month.
  • Sylvain Beucler did 35.0h (out of 40.5h assigned), thus carrying over 5.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 15.0h (out of 15.0h assigned and 1.0h from previous period), thus carrying over 1.0h to the next month.
  • Utkarsh Gupta did 3.5h (out of 11.0h assigned and 18.5h from previous period), thus carrying over 26.0h to the next month.

Evolution of the situation In April, we released 35 DLAs. The LTS team would like to welcome our newest sponsor, Institut Camille Jordan, a French research lab. Thanks to the support of the many LTS sponsors, the entire Debian community benefits from direct security updates, as well as indirect improvements and collaboration with other members of the Debian community. As part of improving the efficiency of our work and the quality of the security updates we produce, the LTS team has continued improving our workflow. Improvements include more consistent tagging of release versions in Git and broader use of continuous integration (CI) to ensure packages are tested thoroughly and consistently. Sponsors and users can rest assured that we work continuously to maintain and improve the already high quality of the work that we do.

Thanks to our sponsors Sponsors that joined recently are in bold.

